To address vaccine hesitancy, which impairs the efforts of COVID-19 vaccination campaigns, it is imperative to understand public vaccination attitudes and grasp their changes in a timely manner. Although reliable and trustworthy, traditional attitude collection based on surveys is time-consuming and expensive, and cannot follow the fast evolution of vaccination attitudes. We leverage textual posts on social media to extract and track users' vaccination stances in near real time by proposing a deep learning framework. To address the impact of linguistic features such as sarcasm and irony, which are commonly used in vaccine-related discourse, we integrate the recent posts of a user's social-network neighbors into the framework to help detect the user's genuine attitude. Based on our annotated dataset from Twitter, models instantiated from our framework can increase the performance of attitude extraction by up to 23% compared with state-of-the-art text-only models. Using this framework, we further validate the feasibility of tracking the real-world evolution of vaccination attitudes via social media. We also demonstrate one practical use of our framework: predicting the likelihood of a change in a user's vaccine hesitancy from information perceived on social media.
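The abstract does not specify the fusion architecture, but the core idea, combining a user's own post with context from their social-network neighbors before classifying stance, can be sketched as below. The function names, the mean-pooling fusion, and the linear stance head are all illustrative assumptions, not the paper's actual model.

```python
import numpy as np

def fuse_user_and_neighbors(user_emb, neighbor_embs):
    # Context = mean of the neighbors' recent-post embeddings;
    # an isolated user falls back to a zero context vector.
    if len(neighbor_embs) == 0:
        context = np.zeros_like(user_emb)
    else:
        context = np.mean(np.stack(neighbor_embs), axis=0)
    return np.concatenate([user_emb, context])

def predict_stance(fused, W, b):
    # Toy linear stance head: 0 = anti, 1 = neutral, 2 = pro.
    logits = fused @ W + b
    return int(np.argmax(logits))
```

In a real system the embeddings would come from a text encoder and the head would be trained on the annotated Twitter data; the sketch only shows where neighbor context enters the prediction.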
translated by Google Translate
Vaccine hesitancy is considered one of the main causes of the stagnating COVID-19 vaccination rates in Europe and the United States, despite sufficient vaccine supply. Grasping public attitudes towards vaccination quickly and accurately is critical to addressing vaccine hesitancy, and social media platforms have proven to be an effective source of public opinion. In this paper, we describe the collection and release of a dataset of tweets related to COVID-19 vaccines. The dataset consists of 2,198,090 tweets collected from Western Europe, 17,934 of which are annotated with the originators' vaccination stances. Our annotations will facilitate the use and development of data-driven models for extracting vaccination attitudes from social media posts, further confirming the power of social media for public health surveillance. To lay the groundwork for future research, we not only conduct statistical analysis and visualization of the dataset, but also evaluate and compare the performance of established text-based benchmarks on vaccination-stance extraction. We demonstrate one potential use of our data in practice by tracking temporal changes in public COVID-19 vaccination attitudes.
Generative adversarial network (GAN) is formulated as a two-player game between a generator (G) and a discriminator (D), where D is asked to differentiate whether an image comes from real data or is produced by G. Under such a formulation, D acts as the rule maker and hence tends to dominate the competition. Towards a fairer game in GANs, we propose a new paradigm for adversarial training, which makes G assign a task to D as well. Specifically, given an image, we expect D to extract representative features that can be adequately decoded by G to reconstruct the input. That way, instead of learning freely, D is urged to align with the view of G for domain classification. Experimental results on various datasets demonstrate the substantial superiority of our approach over the baselines. For instance, we improve the FID of StyleGAN2 from 4.30 to 2.55 on LSUN Bedroom and from 4.04 to 2.82 on LSUN Church. We believe that the pioneering attempt presented in this work could inspire the community with better designed generator-leading tasks for GAN improvement.
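The generator-assigned task amounts to adding a reconstruction term to D's objective: the features D extracts from a real image must be decodable by G back to that image. A minimal sketch of such a combined loss follows; the non-saturating adversarial form, the L2 reconstruction term, and the weight `lam` are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def softplus(x):
    # Numerically stable softplus, log(1 + e^x).
    return np.maximum(x, 0.0) + np.log1p(np.exp(-np.abs(x)))

def d_objective(d_real_logit, d_fake_logit, real_img, recon_img, lam=1.0):
    # Standard non-saturating GAN loss for D: score real high, fake low.
    adv = softplus(-d_real_logit) + softplus(d_fake_logit)
    # Generator-assigned task: the features D extracted from the real
    # image must decode (via G) back to that image.
    recon = np.mean((real_img - recon_img) ** 2)
    return float(adv + lam * recon)
```

A discriminator that classifies well but whose features cannot be decoded back to the input is now penalized, which is the "fairer game" the abstract describes.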
Shape can specify key object constraints, yet existing text-to-image diffusion models ignore this cue and synthesize objects that are incorrectly scaled, cut off, or replaced with background content. We propose a training-free method, Shape-Guided Diffusion, which uses a novel Inside-Outside Attention mechanism to constrain the cross-attention (and self-attention) maps such that prompt tokens (and pixels) referring to the inside of the shape cannot attend outside the shape, and vice versa. To demonstrate the efficacy of our method, we propose a new image editing task where the model must replace an object specified by its mask and a text prompt. We curate a new ShapePrompts benchmark based on MS-COCO and achieve SOTA results in shape faithfulness, text alignment, and realism according to both quantitative metrics and human preferences. Our data and code will be made available at https://shape-guided-diffusion.github.io.
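The constraint at the heart of Inside-Outside Attention can be sketched as a masked softmax: attention logits between a token and a pixel on opposite sides of the shape mask are suppressed before normalization. The function below is a simplified stand-in, assuming boolean inside/outside flags are already available for tokens and pixels; the real mechanism operates inside the diffusion model's cross- and self-attention layers.

```python
import numpy as np

def inside_outside_attention(scores, token_inside, pixel_inside):
    # scores: (T, P) raw cross-attention logits.
    # token_inside: (T,) bool flags for tokens referring to the object.
    # pixel_inside: (P,) bool shape mask over pixels.
    # A token may only attend to pixels on its own side of the mask.
    masked = np.where(token_inside[:, None] == pixel_inside[None, :],
                      scores, -1e9)
    e = np.exp(masked - masked.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)
```

After masking, an "inside" token places essentially all of its attention mass inside the shape, which is what keeps the edited object from leaking into (or being replaced by) background content.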
Test-time adaptation harnesses test inputs to improve the accuracy of a model trained on source data when tested on shifted target data. Existing methods update the source model by (re-)training on each target domain. While effective, re-training is sensitive to the amount and order of the data and to the hyperparameters of optimization. Instead, we update the target data by projecting all test inputs toward the source domain with a generative diffusion model. Our diffusion-driven adaptation method, DDA, shares its classification and generation models across all domains. Both models are trained on the source domain and then fixed during testing. We augment diffusion with image guidance and self-ensembling to automatically decide how much to adapt. Input adaptation by DDA is more robust than prior model-adaptation approaches across a variety of corruptions, architectures, and data regimes on the ImageNet-C benchmark. With its input-wise updates, DDA succeeds where model adaptation degrades: on data in small batches, on data in non-uniform order, or on mixed data with multiple corruptions.
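The projection step can be sketched as: partially forward-diffuse the shifted input, then run the source-trained reverse process back to a clean image, which the fixed source classifier then sees. The sketch below uses an injected `denoise_step` callable as a stand-in for the trained diffusion model, and the simplified noise schedule is an assumption; DDA's image guidance and self-ensembling are omitted.

```python
import numpy as np

def diffusion_driven_adaptation(x_target, denoise_step, t_star=0.5,
                                steps=8, seed=0):
    # Forward-diffuse the shifted test input up to an intermediate
    # noise level t*, so its domain-specific corruption is drowned out
    # but its content survives.
    rng = np.random.default_rng(seed)
    x = (np.sqrt(1.0 - t_star) * x_target
         + np.sqrt(t_star) * rng.standard_normal(x_target.shape))
    # Run the source-trained reverse process back to t = 0, projecting
    # the input toward the source domain before classification.
    for t in np.linspace(t_star, 0.0, steps):
        x = denoise_step(x, t)
    return x
```

The choice of `t_star` trades off corruption removal against content preservation, which is what the image guidance and self-ensembling in DDA decide automatically.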
We introduce ArtBench-10, the first class-balanced, high-quality, cleanly annotated, and standardized dataset for benchmarking artwork generation. It comprises 60,000 images of artwork from 10 distinctive artistic styles, with 5,000 training images and 1,000 testing images per style. ArtBench-10 has several advantages over previous artwork datasets. First, it is class-balanced, whereas most previous artwork datasets suffer from long-tailed class distributions. Second, the images are of high quality with clean annotations. Third, ArtBench-10 is created with standardized data collection, annotation, filtering, and preprocessing procedures. We provide three versions of the dataset with different resolutions ($32\times32$, $256\times256$, and original image size), formatted in a way that is easy to incorporate into popular machine learning frameworks. We also conduct extensive benchmarking experiments with representative image synthesis models on ArtBench-10 and provide in-depth analysis. The dataset is available at https://github.com/liaopeiyuan/artbench under a Fair Use license.
Pre-training models to learn transferable video-text representations has attracted much attention in recent years. Previous dominant works mainly adopt two separate encoders for efficient retrieval, but ignore local associations between videos and texts. Another line of research uses a joint encoder to interact video with text, but it is inefficient since each text-video pair must be fed into the model. In this work, we enable fine-grained video-text interaction while retaining retrieval efficiency via a novel pretext task dubbed Multiple Choice Questions (MCQ), in which a parametric module, BridgeFormer, is trained to answer "questions" constructed from the text by resorting to the video features. Specifically, we exploit the rich semantics of the text (i.e., nouns and verbs) to build questions, with which the video encoder can be trained to capture more regional content and temporal dynamics. In the form of questions and answers, semantic associations between local video-text features can be properly established. BridgeFormer can be removed for downstream retrieval, leaving only the two encoders and thus an efficient and flexible model. Our method outperforms state-of-the-art approaches on text-to-video retrieval across five datasets with different experimental setups (i.e., zero-shot and fine-tuning), including HowTo100M (one million videos). We further conduct zero-shot action recognition, which can be cast as video-to-text retrieval, and our method also significantly surpasses its counterparts. As an additional benefit, our method achieves competitive results on single-modality downstream tasks with much shorter pre-training videos, e.g., action recognition with linear evaluation.
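The question-construction step of MCQ can be illustrated with a toy helper: a content phrase (a noun or verb phrase) is erased from the caption to form the "question", and the erased phrase is the "answer" the bridge module must recover from the video features. The function name and the `[?]` placeholder are illustrative assumptions; the actual pipeline parses phrases with NLP tooling and operates on token embeddings.

```python
def make_mcq(caption, answer_phrase):
    # Erase one content phrase from the caption to form the question;
    # the erased phrase becomes the answer to be recovered from video.
    assert answer_phrase in caption
    question = caption.replace(answer_phrase, "[?]", 1)
    return question, answer_phrase
```

Answering noun questions pushes the video encoder toward regional content, while answering verb questions pushes it toward temporal dynamics, which is the intuition the abstract describes.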
Controllable image synthesis models allow the creation of diverse images based on text instructions or guidance from an example image. Recently, denoising diffusion probabilistic models have been shown to generate more realistic images than prior methods, and have been successfully demonstrated in unconditional and class-conditional settings. We explore fine-grained, continuous control of this model class and introduce a novel unified framework for semantic diffusion guidance, which allows either language or image guidance, or both. Guidance is injected into a pretrained unconditional diffusion model using the gradient of image-text or image matching scores. We explore CLIP-based text guidance, as well as both content- and style-based image guidance, in a unified form. Our text-guided synthesis approach can be applied to datasets without associated text annotations. We conduct experiments on the FFHQ and LSUN datasets and show results of fine-grained text-guided image synthesis, synthesis of images related to a style or content example image, and examples with both text and image guidance.
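Injecting guidance via the gradient of a matching score amounts to shifting each reverse-diffusion mean in the direction that increases the score, much like classifier guidance. The sketch below uses a finite-difference gradient of a stand-in `score_fn` so it runs without a CLIP model; the scale factor and the exact point at which the gradient is taken are simplifying assumptions, not the paper's precise update rule.

```python
import numpy as np

def guided_reverse_mean(mu, sigma_sq, score_fn, x, scale=1.0, eps=1e-4):
    # Shift the reverse-diffusion mean mu by the gradient of a matching
    # score (e.g. a CLIP image-text similarity), estimated here with
    # central finite differences for self-containment.
    grad = np.zeros_like(x)
    for i in range(x.size):
        d = np.zeros(x.size)
        d[i] = eps
        d = d.reshape(x.shape)
        grad.flat[i] = (score_fn(x + d) - score_fn(x - d)) / (2 * eps)
    return mu + scale * sigma_sq * grad
```

In practice the gradient is obtained by backpropagation through the score network, and swapping the score function (image-text vs. image-image) is what lets the same framework accept language guidance, image guidance, or both.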
Deep learning models can achieve high accuracy when trained on large amounts of labeled data. However, real-world scenarios often involve several challenges: Training data may become available in installments, may originate from multiple different domains, and may not contain labels for training. Certain settings, for instance medical applications, often involve further restrictions that prohibit retention of previously seen data due to privacy regulations. In this work, to address such challenges, we study unsupervised segmentation in continual learning scenarios that involve domain shift. To that end, we introduce GarDA (Generative Appearance Replay for continual Domain Adaptation), a generative-replay based approach that can adapt a segmentation model sequentially to new domains with unlabeled data. In contrast to single-step unsupervised domain adaptation (UDA), continual adaptation to a sequence of domains enables leveraging and consolidation of information from multiple domains. Unlike previous approaches in incremental UDA, our method does not require access to previously seen data, making it applicable in many practical scenarios. We evaluate GarDA on two datasets with different organs and modalities, where it substantially outperforms existing techniques.
The development of social media user stance detection and bot detection methods relies heavily on large-scale and high-quality benchmarks. However, in addition to low annotation quality, existing benchmarks generally have incomplete user relationships, hindering graph-based account detection research. To address these issues, we propose a Multi-Relational Graph-Based Twitter Account Detection Benchmark (MGTAB), the first standardized graph-based benchmark for account detection. To our knowledge, MGTAB is built from the largest raw dataset in the field, with over 1.55 million users and 130 million tweets. MGTAB contains 10,199 expert-annotated users and 7 types of relationships, ensuring high-quality annotation and diversified relations. In MGTAB, we extracted the 20 user property features with the greatest information gain, along with user tweet features, as the user features. In addition, we performed a thorough evaluation of MGTAB and other public datasets. Our experiments found that graph-based approaches are generally more effective than feature-based approaches and perform better when multiple relations are introduced. By analyzing the experimental results, we identify effective approaches for account detection and suggest potential future research directions in this field. Our benchmark and standardized evaluation procedures are freely available at: https://github.com/GraphDetec/MGTAB.